fix(FLOW-10): Enforce fair FIFO queue ordering on-chain #46
Conversation
Review: fix(FLOW-10) — Enforce fair FIFO queue ordering on-chain

The mapping-based queue (head/tail pointers) is a sound replacement for the O(n)-shift array. Three concrete issues found, detailed in the inline comments below.
```solidity
 *
 * @param requestId The request ID to remove from the pending requests queue.
 */
function _dropQueuedRequest(uint256 requestId) internal {
```
O(n) regression for cancelRequest
_dropQueuedRequest does a linear scan-and-shift across the entire queue, replacing the former O(1) swap-and-pop (which used _requestIndexInGlobalArray). This is the correct trade-off to maintain FIFO order, but it means that every user-initiated cancelRequest now costs gas proportional to queue depth, not O(1).
With maxPendingRequestsPerUser = 10 and N active users the queue is bounded at 10 × N entries, so gas stays manageable at current scale. Worth noting explicitly in the migration notes so operators are aware that raising maxPendingRequestsPerUser amplifies cancellation cost.
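To make the trade-off concrete, here is a hypothetical Python model of the head/tail-pointer queue described in the PR. A dict stands in for Solidity's mapping; the class and method names are illustrative, not the contract's actual identifiers.

```python
class FifoQueue:
    """Sketch of a mapping-based FIFO queue with head/tail pointers."""

    def __init__(self):
        self.slots = {}   # index -> requestId, models mapping(uint256 => uint256)
        self.head = 0
        self.tail = 0

    def enqueue(self, request_id):
        # O(1): write at tail, advance tail
        self.slots[self.tail] = request_id
        self.tail += 1

    def dequeue(self):
        # O(1): pop the head slot (FIFO order)
        request_id = self.slots.pop(self.head)
        self.head += 1
        return request_id

    def drop(self, request_id):
        # O(n): scan for the ID, shift later entries one slot left,
        # then delete the now-duplicate slot at tail-1
        found = False
        for i in range(self.head, self.tail):
            if not found and self.slots[i] == request_id:
                found = True
            if found and i + 1 < self.tail:
                self.slots[i] = self.slots[i + 1]  # close the gap
        if not found:
            raise KeyError("request not in queue")
        del self.slots[self.tail - 1]
        self.tail -= 1
```

With this shape, only `drop` (the cancellation path) walks the queue; enqueue and dequeue never touch more than one slot.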
cancelRequest previously used _removePendingRequest(requestId), whose comment stated:

```solidity
// === GLOBAL PENDING ARRAY REMOVAL ===
// Uses O(1) lookup + O(n) shift to maintain FIFO order
// FIFO order is critical for DeFi fairness - requests must be processed in submission order
```

So this was also O(n).
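The two removal strategies under discussion can be contrasted in a short, hypothetical Python sketch: swap-and-pop is O(1) but reorders the array, while shift-based removal is O(n) but preserves submission (FIFO) order.

```python
def swap_and_pop(arr, i):
    """O(1) removal: overwrite slot i with the last element, then pop.
    Breaks relative order of the remaining elements."""
    arr[i] = arr[-1]
    arr.pop()
    return arr

def shift_remove(arr, i):
    """O(n) removal: close the gap by shifting, keeping FIFO order."""
    return arr[:i] + arr[i + 1:]
```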
```solidity
// === VALIDATION ===
Request storage request = requests[reqId];
if (request.status != RequestStatus.PENDING)
    revert InvalidRequestState();
```
This InvalidRequestState check is dead code. The _requestsQueue mapping only ever holds IDs for requests that are currently PENDING — every transition out of PENDING (_dequeueRequest, _dropQueuedRequest) removes the ID from the queue first. So reqId read from the queue is always PENDING here.
Leaving dead code in a hot code path is misleading: it suggests to future readers that non-PENDING entries can appear in the queue, which they cannot. Consider removing it (or replacing it with an assert-style comment if you want to document the invariant).
```solidity
    requestFound = true;
}

// Shift the matching request to the queue's tail, then delete it
```
Inaccurate comment. The algorithm does not "shift the matching request to the tail." It shifts all elements after the match one slot to the left (closing the gap), then deletes the now-duplicated last slot. A clearer description:

```solidity
// Once found, shift each subsequent element one position left,
// then delete the last (now-duplicate) slot at tail-1.
```
```solidity
uint256 requestId;
// If reqId is 0, it means we went over the boundaries of
// _requestsQueue.
uint256 reqId = _requestsQueue[_requestsQueueHead+i];
```
The reqId == 0 guard fires only inside the if (j < successfulRequestIds.length) and if (k < rejectedRequestIds.length) branches. When both arrays are exhausted midway through the loop (impossible in practice because totalRequests = j_max + k_max keeps j + k == i+1 at every iteration), a zero value at a queue slot would silently skip past the guard and reach the RequestProcessOutOfOrder revert instead.
This is unreachable today: the "went over the boundaries" invariant is only protected within each branch, but the loop invariant j + k == i+1 (one match per iteration) prevents a zero slot from ever being read. Still, a clarifying comment in the code would help.
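A minimal Python model of the validation loop makes the invariant visible. Names are hypothetical and the branch structure is simplified (the real contract also carries the reqId == 0 guards inside each branch); the point is that each iteration must consume exactly one entry from one of the two arrays, so j + k == i + 1 after every step.

```python
class RequestProcessOutOfOrder(Exception):
    """Models the contract's RequestProcessOutOfOrder revert."""

def validate_batch(queue_slice, successful, rejected):
    """queue_slice: request IDs at [head, head+len) in FIFO order.
    successful/rejected must jointly match the queue prefix in order."""
    j = k = 0
    for i, req_id in enumerate(queue_slice):
        if j < len(successful) and req_id == successful[j]:
            j += 1
        elif k < len(rejected) and req_id == rejected[k]:
            k += 1
        else:
            # Any mismatch (including a hypothetical zero slot) lands here
            raise RequestProcessOutOfOrder(f"slot {i}: {req_id}")
        assert j + k == i + 1  # one match per iteration: the loop invariant
```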
```solidity
vm.stopPrank();

vm.startPrank(coa);
// First 3 successful, transition to PROCESSING
```
Good test. There is a symmetric gap worth covering: a request that the COA lists in successfulRequestIds is user-cancelled before startProcessingBatch executes. The expected behaviour is identical — the queue has shifted, so the validation loop fires RequestProcessOutOfOrder at the position where the cancelled request used to sit — but an explicit test would document that invariant.
```solidity
// === VALIDATION ===
Request storage request = requests[reqId];
if (request.status != RequestStatus.PENDING)
    revert InvalidRequestState();
```
Dead code — InvalidRequestState can never fire here.
reqId is read from _requestsQueue[_requestsQueueHead+i], which is non-zero (checked by the reqId == 0 guards above). The queue invariant guarantees every slot in [_requestsQueueHead, _requestsQueueTail) holds a PENDING request — only PENDING requests are enqueued, and they are removed (via _dequeueRequest or _dropQueuedRequest) the moment their status changes. So request.status is always PENDING here, and revert InvalidRequestState() is unreachable.
The check should be removed. Leaving dead revert code in a validation path is misleading: a reader (or auditor) will assume it guards a real code path and waste time reasoning about it, and any future queue-entry that somehow carries a non-PENDING status would silently mask the real RequestProcessOutOfOrder error with a confusing one.
Suggested change:

```diff
-// === VALIDATION ===
-Request storage request = requests[reqId];
-if (request.status != RequestStatus.PENDING)
-    revert InvalidRequestState();
+// requestId currently holds the last-assigned candidate
```
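The invariant argued above — every queued ID is PENDING because requests are enqueued only at creation and dequeued before any status transition — can be modeled with a hypothetical Python sketch (names are illustrative, not the contract's):

```python
from enum import Enum

class Status(Enum):
    PENDING = 1
    PROCESSING = 2
    CANCELLED = 3

class Requests:
    """Toy model: status transitions always remove the ID from the queue first."""

    def __init__(self):
        self.status = {}
        self.queue = []   # FIFO list stands in for the mapping-based queue

    def create(self, rid):
        self.status[rid] = Status.PENDING
        self.queue.append(rid)          # enqueued only while PENDING

    def start_processing(self):
        rid = self.queue.pop(0)         # dequeue before the transition
        self.status[rid] = Status.PROCESSING
        return rid

    def cancel(self, rid):
        self.queue.remove(rid)          # drop before the transition
        self.status[rid] = Status.CANCELLED

    def invariant_holds(self):
        # Every slot in the queue holds a PENDING request
        return all(self.status[r] is Status.PENDING for r in self.queue)
```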
```solidity
}

// Decrement the queue tail only if the given requestId was found
if (!requestFound) revert RequestNotFound();
```
Wrong error selector — should be RequestNotInQueue(requestId), not RequestNotFound().
RequestNotFound() means "no entry in the requests mapping" (see its uses in cancelRequest, completeProcessing, _startProcessingInternal). But here the request definitely exists — callers of _dropQueuedRequest only reach this point after confirming request.status == PENDING, which requires the request to have been initialised. The missing invariant is that the PENDING request isn't found in the queue (queue corruption).
Using RequestNotFound() here also conflicts with how that error is used elsewhere, and it leaves RequestNotInQueue partially dead for the very corruption scenario it was meant to cover.
Suggested change:

```diff
-if (!requestFound) revert RequestNotFound();
+if (!requestFound) revert RequestNotInQueue(requestId);
```
```solidity
function test_StartProcessingBatch_RevertRequestNotInQueueWhenQueueEmpty() public {
    vm.startPrank(coa);
    vm.expectRevert(abi.encodeWithSelector(
        FlowYieldVaultsRequests.RequestNotInQueue.selector,
        1
    ));
    _startProcessingBatch(1);
    vm.stopPrank();
}

function test_StartProcessingBatch_RevertRequestNotInQueueWhenQueueNonEmpty() public {
    vm.prank(user);
    uint256 reqId = c.createYieldVault{value: 1 ether}(NATIVE_FLOW, 1 ether, VAULT_ID, STRATEGY_ID);

    vm.startPrank(coa);
    _startProcessingBatch(reqId);
    uint256[] memory successfulRequestIds = new uint256[](5);
    successfulRequestIds[0] = reqId;
    successfulRequestIds[1] = reqId+2;

    vm.expectRevert(FlowYieldVaultsRequests.InvalidRequestState.selector);
    _startProcessingBatch(reqId);
    vm.expectRevert(abi.encodeWithSelector(
        FlowYieldVaultsRequests.RequestNotInQueue.selector,
        reqId+2
    ));
    c.startProcessingBatch(successfulRequestIds, new uint256[](0));
    vm.stopPrank();
}
```
Missing coverage: re-processing an already-PROCESSING request.
The old test_StartProcessingBatch_RevertNotPending verified that calling startProcessingBatch on a request already in PROCESSING status is rejected. That test was deleted and none of the new tests replace it.
The new enforcement still prevents this (a PROCESSING request has been dequeued, so the queue no longer contains it, and the COA would get RequestNotInQueue or RequestProcessOutOfOrder), but the expectation is no longer documented as a test. Consider adding:
```solidity
function test_StartProcessingBatch_RevertWhenRequestAlreadyProcessing() public {
    vm.prank(user);
    uint256 reqId = c.createYieldVault{value: 1 ether}(NATIVE_FLOW, 1 ether, VAULT_ID, STRATEGY_ID);
    vm.startPrank(coa);
    _startProcessingBatch(reqId); // PENDING → PROCESSING, also dequeued
    // Queue is now empty; re-submitting the same ID must revert
    vm.expectRevert(abi.encodeWithSelector(
        FlowYieldVaultsRequests.RequestNotInQueue.selector,
        reqId
    ));
    _startProcessingBatch(reqId);
    vm.stopPrank();
}
```
Closes #25
FLOW-10: FIFO Queue Is Not Enforced on-Chain yet Costs O(n) to Maintain
Introduce an optimized queue data structure (mapping-based with head & tail pointers) to avoid the high gas costs of maintaining the FIFO order on-chain. Both enqueue & dequeue operations are now O(1); cancellation remains O(n), while the batch drop is now O(m*n) for dropping a batch of m requests. The previous queue data structure required an array and a mapping, and had O(n) performance when removing a request from the pending queue.
In addition, _startProcessingInternal now has a check which verifies that the request being processed is at the head of the requestsQueue FIFO queue. If that is not the case, the call reverts with RequestProcessOutOfOrder.

The Cadence Scheduler that schedules & processes the requests fetches the request IDs with a getter that returns a given count of pending EVM requests from the queue, in FIFO order. These are fed to preprocessRequests(); after the validation checks they are classified as successful/rejected, and then passed to startProcessingBatch(), which drops the rejected request IDs and calls _startProcessingInternal() for each individual successful request ID.

Migration Notes:
With pendingRequestIds no longer a public variable, the auto-generated getter pendingRequestIds(uint256 index) → uint256 is no longer available. Callers should use getPendingRequestIds() instead, as the Cadence side in this repository already does.
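The batch flow described above — drop the rejected IDs, then process each successful ID against the queue head — can be sketched in hypothetical Python. A plain list stands in for the contract's queue (list.remove models the O(n) drop, pop(0) the O(1) dequeue); function and error names are illustrative only.

```python
def start_processing_batch(queue, successful, rejected):
    """Toy model of the batch entry point: drop rejected request IDs,
    then dequeue each successful ID, enforcing FIFO head order."""
    for rid in rejected:
        queue.remove(rid)        # models _dropQueuedRequest (O(n) per drop)
    processed = []
    for rid in successful:
        head = queue.pop(0)      # models _dequeueRequest at the FIFO head
        if head != rid:
            # models the RequestProcessOutOfOrder revert
            raise RuntimeError("RequestProcessOutOfOrder")
        processed.append(rid)
    return processed
```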